Abstract:
Accountable ring signature (ARS), introduced by Xu and Yung (CARDIS 2004), combines many useful properties of ring and group signatures. In particular, the signer in an ARS scheme has the flexibility of choosing an ad hoc group of users and signing on their behalf (like a ring signature). Furthermore, the signer can designate an opener who may later reveal the signer's identity, if required (like a group signature). Bootle et al. (ESORICS 2015) formalized the notion and gave an efficient construction for ARS with signature size logarithmic in the size of the ring. Their scheme is proven secure in the random oracle model. Recently, Russell et al. (ESORICS 2016) gave a construction with constant signature size that is secure in the standard model. Their scheme is based on q-type assumptions (q-SDH). In this paper, we give a new construction for ARS with the following properties: signatures are constant-size, the scheme is secure in the standard model, and it is based on indistinguishability obfuscation (iO) and one-way functions. To the best of our knowledge, this is the first iO-based ARS scheme. Independently of this, our work can be viewed as a new application of the puncturable programming and hidden sparse trigger techniques introduced by Sahai and Waters (STOC 2014) to design iO-based deniable encryption.
Abstract:
In Eurocrypt 2016, Kiayias, Zhou and Zikas (KZZ) designed a multiparty protocol for computing an arbitrary function, which they prove secure in the malicious model with identifiable abort while supporting robustness. In their protocol, the total transaction verification time turns out to be O(n^6), where n is the number of parties participating in the protocol. The main contribution of this paper is an improvement of this verification time to O(n^3 log n). We achieve this by observing that a deposit transaction created by a party in KZZ can be generated simply from the information contained in a different deposit transaction. This observation, coupled with a host of novel techniques for adding and removing elements of a set relevant to our protocol, is the primary reason we were able to improve the verification time complexity of the KZZ protocol. Our trick can potentially be applied to speed up many other similar protocols (though it is prohibitive in some other specific scenarios). We compare our protocol with others based on various performance and security parameters, and finally discuss the feasibility of implementing it on the Ethereum platform.
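To get a feel for the scale of the asymptotic gap between the two verification times, a quick back-of-the-envelope comparison helps. This is purely illustrative: the constants hidden in the O-notation are not known from the abstract, so only the growth rates are meaningful.

```python
import math

def kzz_verification_cost(n):
    # O(n^6) verification time of the original KZZ protocol (constants ignored)
    return n ** 6

def improved_verification_cost(n):
    # O(n^3 log n) verification time claimed here (constants ignored)
    return n ** 3 * math.log2(n)

# For n = 64 parties, the asymptotic ratio is n^3 / log2(n)
speedup = kzz_verification_cost(64) / improved_verification_cost(64)
```

For 64 parties the ratio of the two growth functions is 64^3 / log2(64), i.e. well over four orders of magnitude, which is why the improvement matters as n grows.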
Abstract:
The Chamberlin-Courant voting rule is an important multi-winner voting rule. Although NP-hard to compute on general profiles, it is known to be polynomial-time solvable on single-crossing and single-peaked electorates by exploiting the structure of these domains. We consider the problem of generalizing the domains on which the voting rule admits efficient algorithms. On the one hand, we show efficient algorithms on profiles that are k candidates or k voters away from the single-peaked and single-crossing domains. In particular, for profiles that are k candidates away from being single-peaked or single-crossing, we show algorithms whose running time is FPT in k. For profiles that are k voters away from being single-peaked or single-crossing, our algorithms are XP in k. These algorithms are obtained by a careful extension of known algorithms on structured profiles [2, 12]. This provides a natural application for the work of Elkind and Lackner [9], who study the problem of finding deletion sets to single-peaked and single-crossing profiles. In contrast to these results, for a different but equally natural way of generalizing these domains, we show severe intractability results. In particular, we show that the problem is NP-hard on profiles that can be "decomposed" into a constant number of single-peaked profiles. Also, if the number of crossings per pair of candidates in a profile is permitted to be at most three (instead of one), the problem remains NP-hard. This stands in contrast with other attempts at generalizing these domains (such as single-peaked or single-crossing width), as it rules out the possibility of fixed-parameter (or even XP) algorithms when parameterized by the number of peaks or the maximum number of crossings per candidate pair.
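The rule itself is simple to state even though it is NP-hard in general: pick a size-k committee minimizing the total misrepresentation, where each voter is represented by their best-ranked committee member. A brute-force reference implementation (exponential in k, so only for tiny profiles, and assuming Borda misrepresentation, i.e. the rank position) can make the objective concrete; the example profile below is made up for illustration.

```python
from itertools import combinations

def cc_score(profile, committee):
    # Borda misrepresentation: each voter is represented by the committee
    # member they rank highest; the score sums these rank positions.
    return sum(min(ranking.index(c) for c in committee) for ranking in profile)

def chamberlin_courant(profile, candidates, k):
    # Brute force over all size-k committees -- exponential in k.
    return min(combinations(candidates, k), key=lambda com: cc_score(profile, com))

# Three voters ranking four candidates (most preferred first)
profile = [
    ["a", "b", "c", "d"],
    ["a", "c", "b", "d"],
    ["d", "c", "b", "a"],
]
winner = chamberlin_courant(profile, ["a", "b", "c", "d"], k=2)
```

Here the committee {a, d} achieves misrepresentation 0, since every voter's top choice is on it; the structured-domain algorithms in the abstract avoid this exhaustive search.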
Abstract:
Deep neural networks (DNNs) are increasingly used for real-time inference, which requires low latency, yet they demand significant computational power as they continue to grow in complexity. Edge clouds promise lower latency due to their proximity to end users, and they host powerful accelerators such as GPUs that provide the computational power DNNs need. But it is also important to ensure that edge-cloud resources are utilized well. For this, multiplexing several DNN models through spatial sharing of the GPU can substantially improve edge-cloud resource usage. Typical GPU runtime environments involve significant interactions with the CPU, to transfer data to the GPU, for CPU-GPU synchronization on inference task completions, etc., and these interactions result in overheads. We present a DNN inference framework with a set of software primitives that reduce the overhead of DNN inference, increase GPU utilization and improve performance, with lower latency and higher throughput. Our first primitive uses the GPU DMA effectively, reducing the CPU cycles spent transferring data to the GPU. A second primitive uses asynchronous 'events' for faster task completion notification. GPU runtimes typically preclude fine-grained user control over GPU resources, causing long GPU downtimes when adjusting resources. Our third primitive supports overlapping of model loading and execution, thus allowing GPU resource re-allocation with very little GPU idle time. Our remaining primitives increase inference throughput by improving scheduling and processing more requests. Overall, our primitives decrease inference latency by more than 35% and increase DNN throughput by 2-3x.
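The third primitive (overlapping model loading with execution) follows a standard pipelining pattern that can be sketched with a plain thread pool. `load_model` and `run_inference` below are hypothetical stand-ins for the real GPU weight transfer and kernel launch, so this illustrates only the overlap structure, not the paper's actual framework API.

```python
from concurrent.futures import ThreadPoolExecutor

def load_model(name):
    # Stand-in for copying a model's weights to the GPU.
    return f"loaded:{name}"

def run_inference(model, batch):
    # Stand-in for launching inference kernels on the GPU.
    return f"{model}->out({batch})"

def serve(model_names, batch):
    """Overlap loading of the next model with execution of the current one,
    so the accelerator is not left idle during re-allocation."""
    results = []
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(load_model, model_names[0])
        for i, name in enumerate(model_names):
            model = future.result()          # wait for current model's load
            if i + 1 < len(model_names):
                future = pool.submit(load_model, model_names[i + 1])  # prefetch
            results.append(run_inference(model, batch))
    return results
```

While `run_inference` executes the current model, the background worker is already loading the next one, which is the "very little GPU idle time" property the abstract describes.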
Abstract:
In this project, we focus on developing a wheat harvester for small farms in India. Farming has become more efficient today since farmers have started adopting modern technologies such as tractors, threshers, etc. Still, farmers with small farms in India cannot afford such machines. Also, these machines are not designed to work on small farms. So, on small farms, manual labor is used for basic farming operations such as harvesting. This costs farmers both money and time. Also, people working as laborers on such farms develop health issues such as back pain, as they need to sit in an awkward position on the ground while harvesting. In our research, we have noticed that around 20 people are required to harvest the wheat on about 0.25 acres of land in one day. So, this wheat harvester has the potential to tremendously reduce the labor force required.
Abstract:
Given a social network where each user is associated with a selection cost, the Budgeted Influence Maximization (BIM) problem asks to choose a subset of users (known as seed users) within the allocated budget whose initial activation leads to the maximum number of influenced nodes. In reality, the influence probability between two users depends on the context (i.e., tags). However, existing studies on this problem do not consider tag-specific influence probabilities. To address this issue, in this paper we introduce the Tag-Based Budgeted Influence Maximization (TBIM) problem, where, along with the other inputs, a tag set is given (each tag also associated with a selection cost), each edge of the network has a tag-specific influence probability, and the goal is to select influential users as well as influential tags within the allocated budget to maximize the influence. Considering the fact that different tags have different popularity across the communities of the same network, we propose three methodologies that work based on effective marginal influence gain computation. The proposed methodologies have been analyzed for their time and space requirements. We evaluate the methodologies on three datasets and observe that they select seed nodes and influential tags that lead to a larger number of influenced nodes compared to the baseline methods.
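The abstract does not spell out the three methodologies, but "effective marginal influence gain computation" under a budget suggests the classic cost-aware greedy pattern used throughout the influence maximization literature. The sketch below is a generic illustration of that pattern on a simplified coverage model (the candidate names, costs, and coverage sets are made up); it is not the paper's actual method.

```python
def greedy_budgeted(candidates, cost, coverage, budget):
    """Repeatedly pick the candidate with the best marginal-coverage-per-cost
    ratio that still fits in the remaining budget."""
    chosen, covered, spent = [], set(), 0
    while True:
        best, best_ratio = None, 0.0
        for c in candidates:
            if c in chosen or spent + cost[c] > budget:
                continue
            marginal_gain = len(coverage[c] - covered)  # newly covered nodes
            ratio = marginal_gain / cost[c]
            if ratio > best_ratio:
                best, best_ratio = c, ratio
        if best is None:      # nothing affordable adds new coverage
            break
        chosen.append(best)
        covered |= coverage[best]
        spent += cost[best]
    return chosen, covered

# Toy instance: three candidate seeds with costs and the node sets they reach
cost = {"u1": 2, "u2": 1, "u3": 2}
coverage = {"u1": {1, 2, 3}, "u2": {3, 4}, "u3": {5}}
seeds, reached = greedy_budgeted(["u1", "u2", "u3"], cost, coverage, budget=3)
```

On this toy instance the greedy first takes u2 (ratio 2/1), then u1 (whose marginal gain shrinks to {1, 2} once node 3 is covered), exhausting the budget before u3 fits, which shows how marginal gains, not raw coverage, drive the selection.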
Abstract:
A channel confluence is an essential part of a river system. A better understanding of confluence hydraulics is required to study fluvial dynamics, irrigation and drainage networks, pollution dispersion and sediment transport. Channel confluence hydraulics is three-dimensional (3D) and associated with complicated flow features.
Abstract:
We consider the generic deep image enhancement problem, where an input image is transformed into a perceptually better-looking image. Existing methods mostly fall into two categories: those trained with prior examples and those trained without prior examples. Recently, Deep Internal Learning solutions to image enhancement in the no-prior-examples setting have been gaining attention. We perform image enhancement using a deep internal learning framework. Our Deep Internal Learning for Image Enhancement framework (DILIE) enhances content and style features and preserves semantics in the enhanced image. To validate the results, we use structural similarity and perceptual error, which are effective in measuring the unrealistic deformations present in images. We show that the DILIE framework outputs good-quality images for hazy and noisy image enhancement tasks.